
[REVIEW] Add OpenSearch backend to cuvs-bench#2012

Open
jrbourbeau wants to merge 8 commits into rapidsai:main from jrbourbeau:cuvs-bench-opensearch2

Conversation

Member

@jrbourbeau jrbourbeau commented Apr 10, 2026

Would still like to refine a bit but pushing up for early feedback

cc @singhmanas1 @janakivamaraju @afourniernv

xref #1907 which does something similar but for Elastic

EDIT: Okay, this is now ready for review (cc @jnke2016)

Signed-off-by: James Bourbeau <jbourbeau@nvidia.com>
print(
    f"Build failed for {config.index_name}: {build_result.error_message}"
)
try:
    backend.initialize()
Member Author

@jrbourbeau jrbourbeau Apr 10, 2026


Note to reviewers: This is a separate, but related, change to the orchestrator. I noticed the initialize and cleanup methods weren't actually being called as their docstrings said they were supposed to be. This is a big diff, but most of it is just indenting the existing code into a try/except block. The functional change is calling initialize + cleanup.

EDIT: FWIW supporting a context manager for backend setup / teardown seems reasonable. But that's also not something I wanted to handle in this PR.
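For what a context-manager-based lifecycle could look like, here's a minimal sketch; the name `backend_lifecycle` and the stub class are hypothetical, not part of the PR:

```python
from contextlib import contextmanager


class BenchmarkBackend:
    """Illustrative stand-in for the real abstract backend interface."""

    def initialize(self):
        """Set up backend resources before any trials run."""

    def cleanup(self):
        """Tear down backend resources after the sweep."""


@contextmanager
def backend_lifecycle(backend):
    """Run initialize() before the sweep and guarantee cleanup() after,
    even when a trial raises."""
    backend.initialize()
    try:
        yield backend
    finally:
        backend.cleanup()
```

The orchestrator would then wrap its sweep in `with backend_lifecycle(backend): ...`, which gives the same guarantee as the try/except in this PR without each backend having to reimplement it.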

Contributor


Good catch. The initialize() and cleanup() hooks in BenchmarkBackend are defined in the abstract base class with docstrings describing their intended purpose, but neither method is actually called by the BenchmarkOrchestrator in _run_sweep() or _run_trial(). This was an oversight because the only backend implemented at the time was CppGoogleBenchmarkBackend, whose lifecycle is entirely self-contained within each subprocess call to build()/search(), making initialize()/cleanup() unnecessary for it.

Contributor

@jnke2016 jnke2016 Apr 13, 2026


EDIT: FWIW supporting a context manager for backend setup / teardown seems reasonable. But that's also not something I wanted to handle in this PR.

I agree that context manager support is reasonable. The purpose of the pluggable backend API is precisely to enable these kinds of extensions without modifying the core orchestrator. Since initialize() and cleanup() are already part of the BenchmarkBackend interface, wiring them up in the orchestrator (whether as explicit calls or a context manager) would ensure the plugin system works end-to-end for future backends without requiring further modifications to the core.

Member


Overall, I think we're leaning a little too strongly toward local (individual backend) control over global (benchmark orchestrator) control, and we should probably prioritize global control for all boilerplate so that we're not reimplementing it in each backend. I'm okay doing this in follow-ups, but the way we typically handle these types of things is to create a GitHub issue that clearly explains the intended changes for the follow-up (written so that someone else could pick it up in the future, just in case) and then reference a link to the issue in a TODO comment in the code where the change is intended to be made. This ensures 1) people reading the code are aware of the intentions, and 2) the issue doesn't get orphaned and forgotten.


[tool.pytest.ini_options]
markers = [
    "integration: tests that require a live OpenSearch node (run with '-m integration')",
]
Member Author


TODO: Make this more generic language (not just opensearch)
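For illustration, a test using that marker could look like the following; the test name here is hypothetical, and such a test would only run when explicitly selected with `pytest -m integration`:

```python
import pytest


@pytest.mark.integration
def test_opensearch_roundtrip():
    """Would index vectors into a live OpenSearch node and query them back.

    Body omitted: the point is the marker, which keeps this test out of the
    default run and requires `pytest -m integration` to select it.
    """
```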

Signed-off-by: James Bourbeau <jbourbeau@nvidia.com>
…ch-opensearch

Signed-off-by: James Bourbeau <jbourbeau@nvidia.com>
Signed-off-by: James Bourbeau <jbourbeau@nvidia.com>
…ch-opensearch

Signed-off-by: James Bourbeau <jbourbeau@nvidia.com>
@jrbourbeau jrbourbeau changed the title [WIP] Add OpenSearch backend to cuvs-bench [REVIEW] Add OpenSearch backend to cuvs-bench Apr 13, 2026
@jnke2016 jnke2016 self-requested a review April 20, 2026 18:43
Contributor

@jnke2016 jnke2016 left a comment


The PR looks good to me.


ext = os.path.splitext(path)[1].lower()
dtype = _DTYPE_FOR_EXT.get(ext, np.float32)
with open(path, "rb") as f:
Member

@cjnolet cjnolet Apr 20, 2026


I mentioned this in the Elasticsearch PR as well: we should standardize/centralize these calls so that we're not having to manually implement them for every backend. The benchmark should be opening the files and passing the vectors (or the file handle) down to the actual backends.

Member Author


Don't disagree with this comment (or the other ones about consolidating various utilities). IIUC, it sounds like these will need deeper changes to the config loader / orchestrator specs. Is that something that's needed in this PR, or is a follow-up refactor fine?

Contributor

@jnke2016 jnke2016 Apr 20, 2026


Since both backends (Elasticsearch and OpenSearch) are separately implementing this, a follow-up PR consolidating both might be easier, especially since both PRs are still open. Otherwise, I can push a PR consolidating those various general-purpose utilities based on both PRs, which they can pull in after it is merged.

Contributor


@jrbourbeau
Loading vectors in the orchestrator or config loader isn't the right approach because the C++ backend never loads vectors into Python. It only passes file paths to the C++ subprocess. Loading vectors at the orchestrator level would waste memory for the C++ backend and any future backend that operates the same way, so this doesn't require changes to the config loader or orchestrator specs.

There are two options to centralize this:

Option 1: Shared utility module. Create a backends/utils.py with a single load_vectors() function that handles all supported binary formats (.fbin, .f16bin, .u8bin, .i8bin, .ibin) and supports subset_size. Python-native backends like OpenSearch and Elasticsearch import and call it when they need vectors. The C++ backend, orchestrator, and config loader are completely untouched. This is the simpler approach with no core changes, though it relies on future backend authors knowing to use it.

Option 2: Lazy loading on Dataset. Make base_vectors, query_vectors, and groundtruth_neighbors lazy properties on the Dataset class that load from the corresponding file paths on first access. The C++ backend never accesses these properties (it uses file paths), so nothing loads. Python-native backends just access dataset.base_vectors and get loaded vectors automatically without needing to know about any utility or handle the empty-array fallback pattern. This prevents the duplication from recurring since future backend authors can't accidentally reimplement it, but it requires changing Dataset from a plain dataclass to a regular class.

I'd suggest we go with option 1 for now since it solves the problem without touching core types, and we can move to option 2 later if the pattern keeps getting reimplemented.
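A rough sketch of what option 1's `load_vectors()` could look like: the extension-to-dtype mapping mirrors the `_DTYPE_FOR_EXT` dict visible in the diff, and the file layout (two int32 header fields, n_rows and n_cols, followed by row-major data) is the usual `.*bin` convention; treat these details as assumptions rather than the PR's actual implementation:

```python
import os

import numpy as np

# Assumed mapping, mirroring the _DTYPE_FOR_EXT dict in the PR.
_DTYPE_FOR_EXT = {
    ".fbin": np.float32,
    ".f16bin": np.float16,
    ".u8bin": np.uint8,
    ".i8bin": np.int8,
    ".ibin": np.int32,
}


def load_vectors(path, subset_size=None):
    """Load a .*bin vector file: two int32 header fields (n_rows, n_cols)
    followed by row-major vector data. Optionally read only the first
    subset_size rows."""
    ext = os.path.splitext(path)[1].lower()
    dtype = _DTYPE_FOR_EXT.get(ext, np.float32)
    with open(path, "rb") as f:
        n_rows, n_cols = np.fromfile(f, dtype=np.int32, count=2)
        if subset_size is not None:
            n_rows = min(n_rows, subset_size)
        # Rows are contiguous, so a truncated read yields the first rows.
        data = np.fromfile(f, dtype=dtype, count=n_rows * n_cols)
    return data.reshape(n_rows, n_cols)
```

Python-native backends would then call `load_vectors(dataset.base_file)` while the C++ backend keeps passing paths straight through, untouched.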

return data.reshape(n_rows, n_cols)


def _load_yaml(path: str) -> Any:
Member

@cjnolet cjnolet Apr 20, 2026


This and the dataset config should also be getting loaded by the benchmark itself and not the backend. @jnke2016



def _gather_algo_configs(
    config_path: str, algorithm_configuration: Optional[str]
Member


We need to centralize this @jnke2016

Contributor


Agreed. Functions like _gather_algo_configs, _load_yaml, and _get_dataset_conf are shared config-loading logic that every backend ends up reimplementing. @afourniernv - The Elasticsearch PR has the same pattern. The C++ loader already has equivalent methods (load_yaml_file, get_dataset_configuration, gather_algorithm_configs) as instance methods. I'll promote those to the base ConfigLoader class so all config loaders inherit them automatically and developers don't need to reimplement them. The backend should only be responsible for its backend-specific concerns (connection parameters, index mappings, API calls), not for loading YAML files or finding datasets by name.

Contributor

@jnke2016 jnke2016 Apr 25, 2026


Though the load() method is abstract because each backend has different config-loading logic, helper methods like load_yaml_file and get_dataset_configuration are common operations that every config loader reimplements when implementing load(). gather_algorithm_configs is also shared between the C++ and OpenSearch loaders.
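A sketch of what promoting those helpers into the base class might look like; the method names follow the ones mentioned above, but the bodies are illustrative guesses (and PyYAML is assumed), not the actual cuvs-bench code:

```python
import os
from abc import ABC, abstractmethod

import yaml  # PyYAML, assumed to be a dependency


class ConfigLoader(ABC):
    """Hypothetical base class: shared helpers live here, only load() is abstract."""

    @abstractmethod
    def load(self, config_path):
        """Backend-specific: turn config files into benchmark configs."""

    # --- shared helpers, promoted from the C++ loader ---

    def load_yaml_file(self, path):
        with open(path) as f:
            return yaml.safe_load(f)

    def get_dataset_configuration(self, name, datasets):
        for conf in datasets:
            if conf.get("name") == name:
                return conf
        raise ValueError(f"Dataset {name!r} not found in dataset config")

    def gather_algorithm_configs(self, config_dir, algorithm=None):
        paths = sorted(
            os.path.join(config_dir, f)
            for f in os.listdir(config_dir)
            if f.endswith((".yaml", ".yml"))
        )
        if algorithm is not None:
            paths = [p for p in paths if algorithm in os.path.basename(p)]
        return paths
```

Each concrete loader then implements only `load()` and calls the inherited helpers, so new backends can't drift on the boilerplate.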


benchmark_configs: List[BenchmarkConfig] = []

for algo_file in algo_files:
Member


Yeah, this is all boilerplate, which is exactly why this should be centralized. The resulting config dicts should be loaded from the files by the benchmark and passed down to the individual backends. It's important that this is standardized across benchmarks so that we don't have specific backends doing things drastically differently than others (that just makes it super complicated for everyone; also, if we find a bug in a specific loader or decide to change how we do things generally, every single backend then needs to be updated individually instead of just changing it in one place). @jnke2016

Contributor


So the loading pipeline (loading the dataset YAML, gathering algorithm files, filtering by algorithm/group names, etc.) is shared logic that should be centralized in the base ConfigLoader instead of having each backend implement it.

    method_params = {"nlist": build_param.get("nlist", 4)}
else:
    raise ValueError(
        f"Unsupported faiss_method {faiss_method!r}. "
Member

@cjnolet cjnolet Apr 20, 2026


Yes! This is important. I believe I suggested this be done in the Elasticsearch PR as well. We need to fail gracefully when the user configures something that's not supported (or loads the wrong file).

Member

cjnolet commented Apr 20, 2026

cc @navneet1v for thoughts / feedback on the opensearch backend for cuvs-bench.

@cjnolet cjnolet added improvement Improves an existing functionality non-breaking Introduces a non-breaking change labels Apr 20, 2026
@cjnolet cjnolet moved this to In Progress in Unstructured Data Processing Apr 20, 2026
Signed-off-by: James Bourbeau <jbourbeau@nvidia.com>
Signed-off-by: James Bourbeau <jbourbeau@nvidia.com>
